222 research outputs found

    Precision radiogenomics: fusion biopsies to target tumour habitats in vivo.

    Funder: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 766030, the Cancer Research UK Cambridge Institute with core grant C14303/A17197, and the Mark Foundation for Cancer Research and Cancer Research UK Cambridge Centre (C9685/A25177).

    High-grade serous ovarian cancer lesions display a high degree of heterogeneity on CT scans. We have recently shown that regions with distinct imaging profiles can be accurately biopsied in vivo using a technique based on the fusion of CT and ultrasound scans.

    Thoracic metastasis in advanced ovarian cancer: comparison between computed tomography and video-assisted thoracic surgery.

    OBJECTIVE: To determine which computed tomography (CT) imaging features predict pleural malignancy in patients with advanced epithelial ovarian carcinoma (EOC), using video-assisted thoracic surgery (VATS), pathology, and cytology findings as the reference standard.
    METHODS: This retrospective study included 44 patients with International Federation of Gynecology and Obstetrics (FIGO) stage III or IV primary or recurrent EOC who had chest CT ≤30 days before VATS. Two radiologists independently reviewed the CT studies and recorded the presence and size of pleural effusions and of ascites; pleural nodules, thickening, enhancement, subdiaphragmatic tumour deposits, and supradiaphragmatic, mediastinal, hilar, and retroperitoneal adenopathy; and peritoneal seeding. VATS, pathology, and cytology findings constituted the reference standard.
    RESULTS: In 26/44 (59%) patients, pleural biopsies were malignant. Only the size of left-sided pleural effusion (reader 1: rho=-0.39, p=0.01; reader 2: rho=-0.37, p=0.01) and the presence of ascites (reader 1: rho=-0.33, p=0.03; reader 2: rho=-0.35, p=0.03) were significantly associated with solid pleural metastasis. Pleural fluid cytology was malignant in 26/35 (74%) patients. Only the presence (p=0.03 for both readers) and size (reader 1: rho=0.34, p=0.04; reader 2: rho=0.33, p=0.06) of right-sided pleural effusion were associated with malignant pleural effusion. Interobserver agreement was substantial (kappa=0.78) for effusion size and moderate (kappa=0.46) for the presence of solid pleural disease. No other CT features were associated with malignancy at biopsy or cytology.
    CONCLUSION: In patients with advanced EOC, ascites and left-sided pleural effusion size were associated with solid pleural metastasis, while the presence and size of right-sided effusion were associated with malignant pleural effusion. No other CT features evaluated were associated with pleural malignancy.
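    The interobserver agreement reported above is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch of the computation for two readers' binary calls; the reader ratings below are hypothetical, not the study's data:

```python
def cohen_kappa(r1, r2):
    """Chance-corrected agreement between two raters (binary labels)."""
    assert len(r1) == len(r2) and len(r1) > 0
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n        # observed agreement
    p1_pos = sum(r1) / n                                 # reader 1 "positive" rate
    p2_pos = sum(r2) / n                                 # reader 2 "positive" rate
    p_e = p1_pos * p2_pos + (1 - p1_pos) * (1 - p2_pos)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical calls: is solid pleural disease present on CT?
reader1 = [1, 1, 0, 0, 1, 0, 1, 0]
reader2 = [1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohen_kappa(reader1, reader2), 2))  # → 0.5
```

    A kappa near 0.46 (as for solid pleural disease here) is conventionally read as moderate agreement; 0.78 (effusion size) as substantial.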

    Reproducibility of CT-based radiomic features against image resampling and perturbations for tumour and healthy kidney in renal cancer patients.

    Computed tomography (CT) is widely used in oncology for morphological evaluation and diagnosis, commonly through visual assessment, often supported by semi-automatic tools. Well-established automatic methods for quantitative imaging offer the opportunity to enrich the radiologist's interpretation with a large number of radiomic features, which must be highly reproducible to be used reliably in clinical practice. This study investigates feature reproducibility against noise, varying resolutions, and segmentations (achieved by perturbing the regions of interest) in a CT dataset with heterogeneous voxel size comprising 98 renal cell carcinomas (RCCs) and 93 contralateral normal kidneys (CKs). In particular, first-order (FO) features and second-order texture features based on both 2D and 3D grey-level co-occurrence matrices (GLCMs) were considered. Moreover, the study carries out a comparative analysis of three of the most commonly used interpolation methods, one of which must be selected before any resampling procedure. Results showed that Lanczos interpolation is the most effective at preserving original information during resampling, and that the median slice resolution coupled with the native slice spacing allows the best reproducibility, with 94.6% and 87.7% of features reproducible in RCC and CK, respectively. GLCMs show their maximum reproducibility when used at short distances.
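    To illustrate the second-order features in question: a GLCM tallies how often pairs of grey levels co-occur at a given pixel offset, and Haralick features such as contrast are weighted sums over that matrix. A toy sketch in plain Python; the study used a full radiomics pipeline on CT volumes, so the 4×4 image and 4 grey levels below are illustrative only, with the distance-1 horizontal offset echoing the finding that short distances reproduce best:

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Symmetric, normalised grey-level co-occurrence matrix for a 2D image
    given as nested lists of integer grey levels in [0, levels)."""
    m = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    total = 0
    for y in range(rows):
        for x in range(cols):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols:
                a, b = img[y][x], img[ny][nx]
                m[a][b] += 1        # count the pair in both directions
                m[b][a] += 1        # so the matrix is symmetric
                total += 2
    return [[v / total for v in row] for row in m]

def contrast(p):
    """Haralick contrast: sum over i, j of (i - j)^2 * p(i, j)."""
    return sum((i - j) ** 2 * p[i][j]
               for i in range(len(p)) for j in range(len(p)))

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
p = glcm(img, dx=1, dy=0)           # horizontal pairs at distance 1
print(round(contrast(p), 3))        # → 0.583
```

    Reproducibility studies like this one then recompute such features after resampling or region-of-interest perturbation and check how stable the values remain.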

    MADGAN: unsupervised medical anomaly detection GAN using multiple adjacent brain MRI slice reconstruction.

    BACKGROUND: Unsupervised learning can discover various unseen abnormalities, relying on large-scale unannotated medical images of healthy subjects. To this end, unsupervised methods reconstruct a single 2D/3D medical image and detect outliers either in the learned feature space or from a high reconstruction loss. However, without considering the continuity between multiple adjacent slices, they cannot directly discriminate diseases composed of an accumulation of subtle anatomical anomalies, such as Alzheimer's disease (AD). Moreover, no study has shown how unsupervised anomaly detection is associated with disease stages, various (i.e., more than two types of) diseases, or multi-sequence magnetic resonance imaging (MRI) scans.
    RESULTS: We propose the unsupervised medical anomaly detection generative adversarial network (MADGAN), a novel two-step method using GAN-based multiple adjacent brain MRI slice reconstruction to detect brain anomalies at different stages on multi-sequence structural MRI: (Reconstruction) Wasserstein loss with gradient penalty plus 100 × ℓ1 loss, trained on 3 healthy brain axial MRI slices to reconstruct the next 3, reconstructs unseen healthy/abnormal scans; (Diagnosis) the average ℓ1 loss per scan discriminates them by comparing the ground-truth and reconstructed slices. For training, we use two datasets composed of 1133 healthy T1-weighted (T1) and 135 healthy contrast-enhanced T1 (T1c) brain MRI scans, for detecting AD and brain metastases/various diseases, respectively. Our self-attention MADGAN can detect AD on T1 scans at a very early stage, mild cognitive impairment (MCI), with area under the curve (AUC) 0.727, and AD at a late stage with AUC 0.894, while detecting brain metastases on T1c scans with AUC 0.921.
    CONCLUSIONS: Similar to the way physicians perform a diagnosis, drawing on massive healthy training data, MADGAN, our first multiple-MRI-slice reconstruction approach, can reliably predict the next 3 slices from the previous 3 only for unseen healthy images. As the first unsupervised method for diagnosing various diseases, MADGAN can reliably detect the accumulation of subtle anatomical anomalies and hyper-intense enhancing lesions, such as (especially late-stage) AD and brain metastases, on multi-sequence MRI scans.
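    The diagnosis step above reduces to a per-scan average ℓ1 distance between the ground-truth slices and the slices the generator predicts. A sketch of that scoring, assuming slices arrive as nested lists of pixel intensities; the trained generator that would produce the reconstructions is not shown:

```python
def l1_anomaly_score(real_slices, recon_slices):
    """Average per-pixel L1 distance across a scan's slices.
    Healthy scans the model has learned to predict give low scores;
    anomalies (e.g. atrophy, metastases) reconstruct poorly -> high scores."""
    total, count = 0.0, 0
    for real, recon in zip(real_slices, recon_slices):
        for r_row, p_row in zip(real, recon):
            for r, p in zip(r_row, p_row):
                total += abs(r - p)
                count += 1
    return total / count

real = [[[0.0, 1.0], [1.0, 0.0]]]    # one tiny 2x2 ground-truth slice
recon = [[[0.1, 0.9], [1.0, 0.2]]]   # its (hypothetical) reconstruction
print(round(l1_anomaly_score(real, recon), 2))  # → 0.1
```

    Thresholding this score (or computing an AUC over it, as the paper does) then separates healthy from anomalous scans.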

    Calibrating Ensembles for Scalable Uncertainty Quantification in Deep Learning-based Medical Segmentation

    Uncertainty quantification in automated image analysis is highly desirable in many applications. Typically, machine learning models in classification or segmentation are developed to provide only binary answers; however, quantifying a model's uncertainty can play a critical role, for example in active learning or human-machine interaction. Uncertainty quantification is especially difficult with deep learning-based models, which are the state of the art in many imaging applications, and current approaches do not scale well to high-dimensional real-world problems. Scalable solutions often rely on classical techniques, such as dropout during inference or training ensembles of identical models with different random seeds, to obtain a posterior distribution. In this paper, we show that these approaches fail to approximate the classification probability. In contrast, we propose a scalable and intuitive framework for calibrating ensembles of deep learning models to produce uncertainty quantification measurements that approximate the classification probability. On unseen test data, we demonstrate improved calibration, sensitivity (in two out of three cases), and precision compared with the standard approaches. We further motivate the use of our method in active learning, creating pseudo-labels to learn from unlabeled images, and human-machine collaboration.
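    For context, a conventional baseline against which such work is measured is to average the member probabilities of a seed-ensemble and then temperature-scale them on held-out validation data. This is not the authors' framework, only a common point of comparison; the tiny ensemble, labels, and grid-search fit below are illustrative assumptions:

```python
import math

def ensemble_mean(member_probs):
    """Mean foreground probability across ensemble members, per sample."""
    return [sum(ps) / len(ps) for ps in zip(*member_probs)]

def nll(probs, labels, temp):
    """Binary negative log-likelihood after scaling logits by 1/temp."""
    total = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, 1e-6), 1 - 1e-6)   # clamp to avoid log(0)
        q = 1 / (1 + math.exp(-math.log(p / (1 - p)) / temp))
        total -= y * math.log(q) + (1 - y) * math.log(1 - q)
    return total / len(probs)

def fit_temperature(probs, labels):
    """Grid-search the temperature minimising validation NLL
    (T > 1 softens overconfident predictions, T < 1 sharpens)."""
    grid = [0.25 * k for k in range(1, 41)]   # 0.25 .. 10.0
    return min(grid, key=lambda t: nll(probs, labels, t))

# Three hypothetical members that are systematically overconfident.
members = [[0.99, 0.98, 0.90, 0.97],
           [0.97, 0.99, 0.85, 0.95],
           [0.98, 0.96, 0.92, 0.99]]
labels = [1, 1, 0, 1]                     # the third case was a miss
probs = ensemble_mean(members)
T = fit_temperature(probs, labels)
print(T)   # fitted temperature; > 1 here, i.e. softening helps
```

    The paper's point is that this kind of post-hoc recipe fails to approximate the true classification probability in high-dimensional segmentation problems, which motivates its calibrated-ensemble framework.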